Optimal Guarantees for Algorithmic Reproducibility and Gradient Complexity in Convex Optimization

Neural Information Processing Systems

Algorithmic reproducibility measures the deviation in outputs of machine learning algorithms upon minor changes in the training process. Previous work suggests that first-order methods would need to trade off convergence rate (gradient complexity) for better reproducibility. In this work, we challenge this perception and demonstrate that both optimal reproducibility and near-optimal convergence guarantees can be achieved for smooth convex minimization and smooth convex-concave minimax problems under various error-prone oracle settings. In particular, given the inexact initialization oracle, our regularization-based algorithms achieve the best of both worlds -- optimal reproducibility and near-optimal gradient complexity -- for minimization and minimax optimization. With the inexact gradient oracle, the near-optimal guarantees also hold for minimax optimization. Additionally, with the stochastic gradient oracle, we show that stochastic gradient descent ascent is optimal in terms of both reproducibility and gradient complexity. We believe our results contribute to an enhanced understanding of the reproducibility-convergence trade-off in the context of convex optimization.
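As a toy illustration of the gradient descent ascent idea mentioned above (a deterministic sketch only; the paper's setting covers stochastic gradients, inexact oracles, and reproducibility bounds that this example does not model), consider the bilinear saddle-point problem min_x max_y f(x, y) = x*y, where the averaged iterates of simultaneous GDA converge to the saddle point even though the raw iterates spiral around it:

```python
import numpy as np

# Toy convex-concave problem: f(x, y) = x * y, unique saddle point (0, 0).
# Plain simultaneous GDA spirals outward around the saddle, but the
# time-averaged iterates converge -- a standard fact for convex-concave
# problems. This is an illustrative sketch, not the paper's algorithm.
def gda_averaged(x0, y0, eta=0.01, steps=10_000):
    x, y = x0, y0
    xs, ys = [], []
    for _ in range(steps):
        gx, gy = y, x                       # grad_x f = y, grad_y f = x
        x, y = x - eta * gx, y + eta * gy   # descent in x, ascent in y
        xs.append(x)
        ys.append(y)
    return np.mean(xs), np.mean(ys)         # averaged iterates

x_bar, y_bar = gda_averaged(1.0, 1.0)       # (x_bar, y_bar) is near (0, 0)
```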


Reviews: UniXGrad: A Universal, Adaptive Algorithm with Optimal Guarantees for Constrained Optimization

Neural Information Processing Systems

The techniques seem interesting and may be useful for other analyses. I liked the use of optimism here and I think this in particular is an under-utilized type of guarantee that might have more applications. I found even the proofs in the appendices to be fairly painless to follow!




Causal Influence Maximization in Hypergraph

Su, Xinyan, Zhang, Zhiheng

arXiv.org Artificial Intelligence

Influence Maximization (IM) is the task of selecting a fixed number of seed nodes in a given network to maximize dissemination benefits. Although much recent research has been devoted to efficient algorithms, further exploration of the graph structure and the objective function themselves is usually neglected. With this motivation, we make the first attempt at hypergraph-based IM with a novel causal objective. We consider the case where each hypergraph node carries specific attributes with an Individual Treatment Effect (ITE), namely the change in potential outcomes before/after infection from a causal-inference perspective. In many scenarios, the sum of the ITEs of the infected nodes is a more reasonable objective for influence spread, yet it is difficult to optimize with current IM algorithms. In this paper, we introduce a new algorithm called \textbf{CauIM}. We first recover the ITE of each node from observational data and then run a weighted greedy algorithm to maximize the sum of ITEs of the infected nodes. Theoretically, we present a generalized lower bound on influence spread beyond the well-known $(1-\frac{1}{e})$ optimality guarantee and provide a robustness analysis. Empirically, in real-world experiments, we demonstrate the effectiveness and robustness of \textbf{CauIM}: it significantly outperforms previous IM and randomized methods.
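The weighted greedy step described in the abstract can be sketched on an ordinary graph with an independent-cascade spread model. This is a simplification in two ways: CauIM operates on hypergraphs, and the ITE weights here are assumed given rather than recovered from observational data; the graph, infection probability, and helper names are all illustrative:

```python
import random

def simulate_spread(graph, seeds, p, rng):
    """One independent-cascade run; returns the set of infected nodes."""
    infected, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nbr in graph.get(node, []):
            if nbr not in infected and rng.random() < p:
                infected.add(nbr)
                frontier.append(nbr)
    return infected

def expected_ite_spread(graph, seeds, ite, p, rng, runs=200):
    """Monte Carlo estimate of the objective: E[sum of ITEs of the infected]."""
    total = 0.0
    for _ in range(runs):
        total += sum(ite[v] for v in simulate_spread(graph, seeds, p, rng))
    return total / runs

def weighted_greedy(graph, ite, k, p=0.3, seed=0):
    """Greedily add the node with the largest ITE-weighted marginal spread."""
    rng = random.Random(seed)
    seeds = set()
    for _ in range(k):
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: expected_ite_spread(graph, seeds | {v}, ite, p, rng))
        seeds.add(best)
    return seeds

# Tiny example: node 0 is a hub with a high ITE, so greedy should pick it.
graph = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0], 4: []}
ite = {0: 2.0, 1: 0.5, 2: 0.5, 3: 0.5, 4: 0.1}
seeds = weighted_greedy(graph, ite, k=2)
```

The objective being monotone lets the plain greedy loop work; CauIM's analysis concerns exactly when such greedy selection retains approximation guarantees once the spread is ITE-weighted.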


UniXGrad: A Universal, Adaptive Algorithm with Optimal Guarantees for Constrained Optimization

Kavis, Ali, Levy, Kfir Y., Bach, Francis, Cevher, Volkan

Neural Information Processing Systems

We propose a novel adaptive, accelerated algorithm for the stochastic constrained convex optimization setting. Our method, which is inspired by the Mirror-Prox method, \emph{simultaneously} achieves the optimal rates for smooth/non-smooth problems with either deterministic/stochastic first-order oracles. This is done without any prior knowledge of either the smoothness or the noise properties of the problem. To the best of our knowledge, this is the first adaptive, unified algorithm that achieves the optimal rates in the constrained setting. We demonstrate the practical performance of our framework through extensive numerical experiments.
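As a hedged illustration of the two ingredients the abstract combines, an extragradient (Mirror-Prox-style) update with an AdaGrad-style adaptive step size, here is a minimal sketch on a toy constrained problem. This is not the authors' exact UniXGrad update; the problem, the step-size constant D, and the projection are illustrative assumptions:

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Euclidean projection onto the ball of radius r."""
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def adaptive_extragradient(grad, x0, steps=500, D=1.0):
    """Extragradient with an AdaGrad-style step size eta_t = D / sqrt(sum ||g||^2)."""
    x, g_sq = x0, 0.0
    for _ in range(steps):
        g = grad(x)
        g_sq += np.dot(g, g)
        eta = D / np.sqrt(g_sq)
        x_half = proj_ball(x - eta * g)        # extrapolation step
        g_half = grad(x_half)
        g_sq += np.dot(g_half, g_half)
        eta = D / np.sqrt(g_sq)
        x = proj_ball(x - eta * g_half)        # update uses gradient at x_half
    return x

# Toy problem: minimize ||x - c||^2 over the unit ball with c outside it;
# the constrained optimum is c / ||c|| = (1, 0).
c = np.array([2.0, 0.0])
x_star = adaptive_extragradient(lambda x: 2 * (x - c), np.zeros(2))
```

The step size shrinks automatically as squared gradient norms accumulate, which is what lets such methods run without knowing the smoothness or noise level in advance.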